Non-stationary source separation is a well-established branch of blind source separation with many different methods. However, for none of these methods are large-sample results available. To bridge this gap, we develop large-sample theory for NSS-JD, a non-stationary source separation method based on the joint diagonalization of block-wise covariance matrices. We work under an instantaneous linear mixing model of independent Gaussian non-stationary source signals together with a very general set of assumptions: apart from boundedness conditions, the only assumptions we make are that the sources exhibit finite dependence and that their variance functions differ sufficiently for them to be asymptotically separable. Under the preceding conditions, we show the consistency of the unmixing estimator and its limiting Gaussian distribution at the standard square-root rate. Simulation experiments are used to verify the theoretical results and to study the impact of block length on the separation.
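As a rough illustration of the block-diagonalization idea, here is a sketch with made-up variance profiles and a hypothetical mixing matrix. With only two blocks, joint diagonalization reduces to a generalized symmetric eigenproblem; the actual NSS-JD method jointly diagonalizes many blocks.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

# Two independent Gaussian nonstationary sources whose variance profiles differ.
n = 4000
t = np.linspace(0.0, 1.0, n)
s1 = rng.standard_normal(n) * (0.5 + 2.0 * t)   # variance increasing in time
s2 = rng.standard_normal(n) * (2.5 - 2.0 * t)   # variance decreasing in time
S = np.vstack([s1, s2])

A = np.array([[1.0, 0.6],                        # hypothetical mixing matrix
              [0.4, 1.0]])
X = A @ S                                        # observed mixtures

# Block-wise covariance matrices (two blocks here; NSS-JD uses many).
half = n // 2
C1 = np.cov(X[:, :half])
C2 = np.cov(X[:, half:])

# With exactly two blocks, joint diagonalization reduces to the generalized
# symmetric eigenproblem  C1 w = lambda C2 w.
_, W = eigh(C1, C2)

# The columns of W jointly diagonalize both block covariances; this is the
# unmixing principle: W.T @ X recovers the sources up to scale and order.
D1 = W.T @ C1 @ W
D2 = W.T @ C2 @ W
```

The key point is that a single matrix W makes both block covariances diagonal at once, which is only possible because the two sources' variances evolve differently across the blocks.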
We analyze the cumulative regret of the Dyadic Search algorithm of Bachoc et al. [2022].
This paper studies a natural generalization of the problem of minimizing a univariate convex function $f$ by sequentially querying its values. At each time step $t$, the optimizer can invest a budget $b_t$ in a query point $x_t$ of their choice to obtain a fuzzy evaluation of $f$ at $x_t$, whose accuracy depends on the amount of budget invested in $x_t$ across time. This setting is motivated by the minimization of objectives whose values can only be determined approximately through lengthy or expensive computations. We design an anytime, parameter-free algorithm called Dyadic Search, for which we prove near-optimal optimization error guarantees. As a byproduct of our analysis, we show that the classical dependence on the global Lipschitz constant in the error bounds is an artifact of the granularity of the budget. Finally, we illustrate our theoretical findings with numerical simulations.
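The budgeted-query setting can be caricatured as follows. This sketch uses a plain noisy trisection, not the paper's Dyadic Search algorithm (whose query placement and budget allocation differ), and all numbers are illustrative.

```python
import random

def fuzzy_eval(f, x, budget, ledger, sigma=0.05):
    """Invest `budget` units at x and return an estimate of f(x); accuracy
    improves with the total budget spent at x so far (toy fuzzy evaluation)."""
    count, total = ledger.get(x, (0, 0.0))
    for _ in range(budget):
        total += f(x) + random.gauss(0.0, sigma)
        count += 1
    ledger[x] = (count, total)
    return total / count           # averaging: error ~ sigma / sqrt(count)

def budgeted_trisection(f, lo, hi, rounds=30, budget_per_round=100):
    """Toy budgeted trisection for a univariate convex f on [lo, hi]."""
    ledger = {}
    for _ in range(rounds):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if fuzzy_eval(f, m1, budget_per_round, ledger) < fuzzy_eval(f, m2, budget_per_round, ledger):
            hi = m2                # the minimum is unlikely to lie in (m2, hi]
        else:
            lo = m1
    return (lo + hi) / 2.0

random.seed(0)
xstar = budgeted_trisection(lambda x: (x - 1.3) ** 2, 0.0, 4.0)
```

The design tension the paper studies is visible even here: once the interval is small, function-value differences shrink below the evaluation noise, so further progress requires investing more budget per query rather than more queries.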
Is a sample rich enough to determine, at least locally, the parameters of a neural network? To answer this question, we introduce a new local parameterization of a given deep neural network, obtained by fixing the values of some of its weights. This allows us to define local lifting operators whose inverses are charts of a smooth manifold of a high-dimensional space. The function implemented by the deep ReLU neural network composes the local lifting with a linear operator that depends on the sample. From this convenient representation, we derive a geometric necessary and sufficient condition of local identifiability. Looking at tangent spaces, the geometric condition yields: 1/ a sharp and testable necessary condition of identifiability, and 2/ a sharp and testable sufficient condition of local identifiability. The validity of the conditions can be tested numerically using backpropagation and matrix rank computations.
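A minimal numerical version of such a rank test might look as follows. The network, the sample, and the finite-difference Jacobian (standing in for backpropagation) are illustrative assumptions, not the paper's construction; the sketch only shows why matrix rank is the right object to compute.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny ReLU network R^2 -> R with one hidden layer of width 3 (13 parameters).
theta = rng.standard_normal(13)
X = rng.standard_normal((20, 2))            # the sample

def forward(theta, X):
    W1 = theta[:6].reshape(3, 2); b1 = theta[6:9]
    W2 = theta[9:12].reshape(1, 3); b2 = theta[12:]
    return (np.maximum(X @ W1.T + b1, 0.0) @ W2.T + b2).ravel()

# Jacobian of the outputs at the sample w.r.t. all parameters (central finite
# differences stand in for backpropagation here).
eps = 1e-6
J = np.empty((20, 13))
for j in range(13):
    tp, tm = theta.copy(), theta.copy()
    tp[j] += eps
    tm[j] -= eps
    J[:, j] = (forward(tp, X) - forward(tm, X)) / (2.0 * eps)

# Each hidden neuron carries a positive-rescaling symmetry that leaves the
# implemented function unchanged, so rank(J) <= 13 - 3 = 10; attaining this
# bound is the generic, locally identifiable (modulo symmetry) case.
r = np.linalg.matrix_rank(J, tol=1e-6)      # tol absorbs finite-difference noise
```

The explicit `tol` matters: the rescaling symmetries put exact zero directions in the true Jacobian, but finite differencing turns them into tiny nonzero singular values that the default tolerance would count as rank.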
The possibility of recovering the parameters, i.e. the weights and biases, of a neural network from knowledge of its function on a subset of the input space can be, depending on the situation, a curse or a blessing. On the one hand, recovering the parameters enables better adversarial attacks and can also disclose sensitive information from the dataset used to construct the network. On the other hand, if the parameters of a network can be recovered, users are guaranteed that the features in the latent spaces can be interpreted. It also provides a foundation for obtaining formal guarantees on the performance of the network. It is therefore important to characterize the networks whose parameters can be identified and those whose parameters cannot. In this article, we provide a set of conditions on a deep fully-connected feedforward ReLU network under which the parameters of the network are uniquely identified, modulo permutation and positive rescaling, from the function it implements on a subset of the input space.
In this paper, we present a new explainability formalism designed to shed light on how each input variable of a test set impacts the predictions of machine learning models. Specifically, we propose a group explainability formalism for trained machine learning decision rules, based on their responses to the variability of the input variable distributions. To emphasize the impact of each input variable, this formalism uses an information-theoretic framework that quantifies the influence of all input-output observations based on entropic projections. It is thus the first unified and model-agnostic formalism enabling data scientists to interpret the dependence between the input variables, their impact on the prediction errors, and their influence on the output predictions. Convergence rates of the entropic projections are provided in the large-sample case. Most importantly, we prove that computing an explanation in our framework has low algorithmic complexity, making it scalable to real-life large datasets. We illustrate our strategy by explaining complex decision rules learned using XGBoost, Random Forest, or Deep Neural Network classifiers on various datasets such as Adult Income, MNIST, CelebA, Boston Housing, and Iris, as well as synthetic ones. Finally, we make clear the differences between our approach and the explainability strategies LIME and SHAP, which are based on single observations. Results can be reproduced using the freely distributed Python toolbox https://gems-ai.aniti.fr/.
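The paper's entropic projections are not reproduced here, but the underlying idea of measuring an input variable's influence information-theoretically can be sketched with a simple histogram mutual-information estimate. The data and the "decision rule" below are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "trained decision rule": predictions depend strongly on x1, weakly on x2.
n = 20000
x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)
pred = (x1 + 0.1 * x2 > 0).astype(int)     # stand-in for model predictions

def mutual_information(x, y, bins=20):
    """Histogram estimate of I(X; Y) in nats for continuous x and binary y."""
    joint, _, _ = np.histogram2d(x, y, bins=(bins, 2))
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)      # marginal of x
    py = p.sum(axis=0, keepdims=True)      # marginal of y
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

mi_x1 = mutual_information(x1, pred)
mi_x2 = mutual_information(x2, pred)
```

A variable that drives the decision rule carries much more information about the output than one that barely enters it, which is the kind of dependence the formalism quantifies (with convergence guarantees the histogram estimator lacks).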
For long-term simultaneous planning, localization and mapping (SPLAM), a robot should be able to continuously update its map according to the dynamic changes of the environment and the new areas explored. With limited onboard computation capabilities, a robot should also be able to limit the size of the map used for online localization and mapping. This paper addresses these challenges using a memory management mechanism, which distinguishes locations that should remain in a Working Memory (WM) for online processing from locations that should be transferred to a Long-Term Memory (LTM). When revisiting previously mapped areas that are in LTM, the mechanism can retrieve these locations and place them back in WM for online SPLAM. The approach is tested on a robot equipped with a short-range laser rangefinder and an RGB-D camera, autonomously patrolling 10.5 km in an indoor environment over 11 sessions, during which it encountered 139 people.
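The WM/LTM transfer-and-retrieval cycle can be sketched as follows. This is a toy: recency stands in for the visit-based weighting an actual memory manager would use, and the location ids and capacity are invented.

```python
from collections import OrderedDict

class MapMemory:
    """Toy WM/LTM memory manager: keep only the most recently observed
    locations in a bounded Working Memory, transfer the rest to Long-Term
    Memory, and retrieve LTM locations back into WM when revisited."""

    def __init__(self, wm_capacity):
        self.wm = OrderedDict()   # location id -> data, most recent last
        self.ltm = {}
        self.wm_capacity = wm_capacity

    def observe(self, loc_id, data=None):
        if loc_id in self.ltm:                    # revisit: retrieve from LTM
            self.wm[loc_id] = self.ltm.pop(loc_id)
        elif loc_id in self.wm:                   # already in WM: refresh recency
            self.wm.move_to_end(loc_id)
        else:                                     # new location
            self.wm[loc_id] = data
        while len(self.wm) > self.wm_capacity:    # transfer oldest to LTM
            old_id, old_data = self.wm.popitem(last=False)
            self.ltm[old_id] = old_data

mem = MapMemory(wm_capacity=3)
for loc in [1, 2, 3, 4, 1]:   # location 1 moves to LTM, then is revisited
    mem.observe(loc)
```

The bounded WM is what keeps online processing time constant regardless of how large the full map in LTM grows.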
Traditional approaches to RL have focused on learning decision policies directly from episodic decisions, while slowly and implicitly learning the semantics of the compositional representations needed for generalization. Some approaches refine representations via auxiliary self-supervised losses while simultaneously learning decision policies. Even so, learning compositional representations from hand-designed, context-independent self-supervised losses (multi-view) still adapts relatively slowly to the real world, which contains many non-IID subspaces requiring rapid adaptation to distribution shift in both temporal and spatial attention patterns at varying levels of abstraction. In contrast, supervised language model cascades have shown the flexibility to adapt to many diverse manifolds, and hints of the self-learning needed for autonomous task transfer. However, to date, transfer methods for language models like few-shot learning and fine-tuning still require human supervision, and transfer learning using self-learning methods has been underexplored. We propose a self-supervised loss policy called contrastive distillation which manifests latent variables with high mutual information with both source and target tasks from weights to tokens. We show how this outperforms common methods of transfer learning and suggests a useful design axis of trading off compute for generalizability in online transfer. Contrastive distillation is improved by sampling from memory, suggesting a simple algorithm that samples negative examples for contrastive losses more efficiently than random sampling.
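The memory-based negative sampling ingredient can be sketched with a generic InfoNCE loss. This shows only the contrastive building block, with assumed shapes and a hypothetical memory buffer, not the paper's contrastive distillation from weights to tokens.

```python
import numpy as np

rng = np.random.default_rng(0)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss for one anchor: pull the positive close, push the
    sampled negatives away (cosine similarities, softmax over all of them)."""
    def cos(a, b):
        return (a @ b.T) / (np.linalg.norm(a, axis=-1, keepdims=True)
                            * np.linalg.norm(b, axis=-1, keepdims=True).T)
    logits = np.concatenate([cos(anchor[None], positive[None]).ravel(),
                             cos(anchor[None], negatives).ravel()]) / temperature
    logits -= logits.max()                       # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

# Memory of past representations; sample "hard" negatives (the entries most
# similar to the anchor) instead of uniformly at random.
memory = rng.standard_normal((512, 16))
anchor = rng.standard_normal(16)
positive = anchor + 0.1 * rng.standard_normal(16)

sims = memory @ anchor / (np.linalg.norm(memory, axis=1) * np.linalg.norm(anchor))
hard_negs = memory[np.argsort(sims)[-32:]]       # 32 most similar memory entries
rand_negs = memory[rng.choice(512, 32, replace=False)]

loss_hard = info_nce(anchor, positive, hard_negs)
loss_rand = info_nce(anchor, positive, rand_negs)
```

Hard negatives from memory yield a larger (more informative) loss than uniformly sampled ones, which is the intuition behind preferring memory-based sampling to random sampling.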
As of 2022, greenhouse gas (GHG) emissions reporting and auditing are not yet compulsory for all companies, and methodologies of measurement and estimation are not unified. We propose a machine learning-based model to estimate scope 1 and scope 2 GHG emissions of companies not yet reporting them. Our model, specifically designed to be transparent and completely adapted to this use case, is able to estimate emissions for a large universe of companies. It shows good out-of-sample global performance as well as good out-of-sample granular performance when evaluated by sector, by country, or by revenue bucket. We also compare our results to those of other providers and find our estimates to be more accurate. Thanks to the proposed explainability tools using Shapley values, our model is fully interpretable, the user being able to understand which factors explain the GHG emissions for each particular company.
In intensively managed forests in Europe, where forests are divided into stands of small size and may show heterogeneity within stands, a high spatial resolution (10 - 20 meters) is arguably needed to capture the differences in canopy height. In this work, we developed a deep learning model based on multi-stream remote sensing measurements to create a high-resolution canopy height map over the "Landes de Gascogne" forest in France, a large maritime pine plantation of 13,000 km$^2$ with flat terrain and intensive management. This area is characterized by even-aged and mono-specific stands, with typical stand lengths of a few hundred meters, harvested every 35 to 50 years. Our deep learning U-Net model uses multi-band images from Sentinel-1 and Sentinel-2 with composite time averages as input to predict tree height derived from GEDI waveforms. The evaluation is performed with external validation data from forest inventory plots and a stereo 3D reconstruction model based on Skysat imagery available at specific locations. We trained seven different U-Net models based on combinations of Sentinel-1 and Sentinel-2 bands to evaluate the importance of each instrument in the dominant height retrieval. The model outputs allow us to generate a 10 m resolution canopy height map of the whole "Landes de Gascogne" forest area for 2020 with a mean absolute error of 2.02 m on the test dataset. The best predictions were obtained using all available satellite layers from Sentinel-1 and Sentinel-2, but using only one satellite source also provided good predictions. For all validation datasets in coniferous forests, our model showed better metrics than previous canopy height models available in the same region.